Thomas Bayes

Portrait of Bayes used in the 1936 book History of Life Insurance; it is doubtful whether it actually depicts Bayes.[1] No earlier portrait or claimed portrait survives.

| | |
|---|---|
| Born | c. 1701, London, England |
| Died | 7 April 1761 (aged 59), Tunbridge Wells, Kent, England |
| Nationality | English |
Thomas Bayes (pronounced: /ˈbeɪz/) (c. 1701 – 7 April 1761)[1][2][note a] was an English mathematician and Presbyterian minister, known for having formulated a specific case of the theorem that bears his name: Bayes' theorem. Bayes never published what would eventually become his most famous accomplishment; his notes were edited and published after his death by Richard Price.[3]
Thomas Bayes was the son of the London Presbyterian minister Joshua Bayes[4] and was possibly born in Hertfordshire.[5] In 1719, he enrolled at the University of Edinburgh to study logic and theology. On his return around 1722, he assisted his father at the latter's non-conformist chapel in London before moving to Tunbridge Wells, Kent, around 1734. There he served as minister of the Mount Sion chapel until 1752.[6]
He is known to have published two works in his lifetime, one theological and one mathematical:
* Divine Benevolence, or an Attempt to Prove That the Principal End of the Divine Providence and Government is the Happiness of His Creatures (1731)
* An Introduction to the Doctrine of Fluxions, and a Defence of the Mathematicians Against the Objections of the Author of The Analyst (published anonymously in 1736), a defence of the logical foundations of Newton's calculus against the criticism of George Berkeley
It is speculated that Bayes was elected as a Fellow of the Royal Society in 1742[7] on the strength of the Introduction to the Doctrine of Fluxions, as he is not known to have published any other mathematical works during his lifetime.
In his later years he took a deep interest in probability. Stephen Stigler feels that he became interested in the subject while reviewing a work written in 1755 by Thomas Simpson,[8] but George Alfred Barnard thinks he learned mathematics and probability from a book by de Moivre.[9] His work and findings on probability theory were passed in manuscript form to his friend Richard Price after his death.
By 1755 he was ill, and by 1761 he had died in Tunbridge Wells. He was buried in Bunhill Fields Cemetery in Moorgate, London, where many Nonconformists lie.
Bayes' solution to a problem of "inverse probability" was presented in An Essay towards solving a Problem in the Doctrine of Chances, which was read to the Royal Society in 1763, after Bayes' death. Richard Price shepherded the work through this presentation and its publication in the Philosophical Transactions of the Royal Society of London the following year. This was an argument for using a uniform prior distribution for a binomial parameter, and not merely a general postulate.[10]
This essay contains a statement of a special case of Bayes' theorem.
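In modern notation (which Bayes himself did not use), the general form of the theorem can be written as

$$ P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}. $$

The special case treated in the Essay concerns an unknown success probability $\theta$: if $x$ successes are observed in $n$ independent trials and $\theta$ is given a uniform prior on $[0, 1]$, the posterior density is

$$ p(\theta \mid x) = \frac{\theta^{x}(1-\theta)^{n-x}}{\int_0^1 t^{x}(1-t)^{n-x}\,dt}, $$

a Beta(x + 1, n − x + 1) distribution. This is a modern reconstruction rather than Bayes' own formulation.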
In the first decades of the eighteenth century, many problems concerning the probability of certain events, given specified conditions, were solved. For example, given a specified number of white and black balls in an urn, what is the probability of drawing a black ball? Attention soon turned to the converse of such a problem: given that one or more balls has been drawn, what can be said about the number of white and black balls in the urn? These are sometimes called "inverse probability" problems. The Essay of Bayes contains his solution to a similar problem, posed by Abraham de Moivre, author of The Doctrine of Chances (1718).
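As an illustrative sketch (not taken from Bayes' essay), the following Python code applies Bayes' theorem to an urn problem of this kind: it assumes a hypothetical urn of 10 balls, a uniform prior over the possible numbers of black balls, and draws made with replacement.

```python
from fractions import Fraction

# Hypothetical example: an urn holds 10 balls, each either black or white.
# The number of black balls is unknown; assume a uniform prior over 0..10.
N = 10
prior = {k: Fraction(1, N + 1) for k in range(N + 1)}

def likelihood(k, draw):
    """Probability of one draw (with replacement) given k black balls in the urn."""
    p_black = Fraction(k, N)
    return p_black if draw == "black" else 1 - p_black

def posterior(prior, draws):
    """Apply Bayes' theorem: posterior is proportional to prior times likelihood."""
    unnormalised = {}
    for k, p in prior.items():
        like = Fraction(1)
        for draw in draws:
            like *= likelihood(k, draw)
        unnormalised[k] = p * like
    total = sum(unnormalised.values())
    return {k: v / total for k, v in unnormalised.items()}

# Observing three black draws and one white draw shifts belief toward
# compositions with more black balls.
post = posterior(prior, ["black", "black", "white", "black"])
for k, p in post.items():
    print(f"{k} black balls: {float(p):.3f}")
```

Observing mostly black draws concentrates the posterior on compositions with many black balls, which is exactly the "inverse" inference described above.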
In addition to the Essay Towards Solving a Problem, a paper on asymptotic series was published posthumously.
Bayesian probability is the name given to several related interpretations of probability, which have in common the notion of probability as something like a partial belief, rather than a frequency. This allows the application of probability to all sorts of propositions rather than just ones that come with a reference class. "Bayesian" has been used in this sense since about 1950. Since its rebirth in the 1950s, advancements in computing technology have allowed scientists from many disciplines to pair traditional Bayesian statistics with random walk techniques. The use of the Bayes theorem has been extended in science and in other fields.[11]
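As a sketch of what pairing Bayesian statistics with random-walk techniques looks like in practice (a modern method, not anything available to Bayes), the following Python code uses a simple random-walk Metropolis sampler to draw from the posterior of a binomial success probability under a uniform prior; the data, step size, and iteration count are illustrative assumptions.

```python
import math
import random

def log_posterior(theta, successes, trials):
    """Log of the unnormalised posterior: uniform prior times binomial likelihood."""
    if not 0.0 < theta < 1.0:
        return float("-inf")
    return successes * math.log(theta) + (trials - successes) * math.log(1.0 - theta)

def metropolis(successes, trials, steps=20000, step_size=0.05, seed=0):
    """Random-walk Metropolis sampler for the binomial parameter theta."""
    rng = random.Random(seed)
    theta = 0.5                      # arbitrary starting point
    current = log_posterior(theta, successes, trials)
    samples = []
    for _ in range(steps):
        proposal = theta + rng.gauss(0.0, step_size)   # random-walk proposal
        proposed = log_posterior(proposal, successes, trials)
        # Accept with probability min(1, posterior ratio).
        accept = proposed - current
        if accept >= 0 or rng.random() < math.exp(accept):
            theta, current = proposal, proposed
        samples.append(theta)
    return samples

# Hypothetical data: 7 successes in 10 trials.
draws = metropolis(successes=7, trials=10)
burned = draws[len(draws) // 2:]          # discard the first half as burn-in
print("posterior mean ≈", sum(burned) / len(burned))
```

With a uniform prior and 7 successes in 10 trials the posterior is Beta(8, 4), whose mean is 2/3, so the printed estimate should land near 0.667.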
Bayes himself might not have embraced the broad interpretation now called Bayesian. It is difficult to assess Bayes' philosophical views on probability, since his essay does not go into questions of interpretation. There Bayes defines the probability of an event as the ratio between the value at which an expectation depending on the happening of the event ought to be computed and the value of the thing expected upon its happening (Definition 5).
In modern utility theory, expected utility can, with qualifications (people also buy risk for small amounts and buy security for large amounts), be taken as the probability of an event times the payoff received in case of that event. Rearranging that to solve for the probability yields Bayes' definition. As Stigler points out,[8] this is a subjective definition, and it does not require repeated events; however, it does require that the event in question be observable, for otherwise it could never be said to have "happened". Stigler argues that Bayes intended his results in a more limited way than modern Bayesians do; given Bayes' definition of probability, his result concerning the parameter of a binomial distribution makes sense only to the extent that one can bet on its observable consequences.
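In symbols (again a modern reconstruction, not Bayes' notation): if an expectation that depends on an event is worth $E$, and the event pays $V$ when it happens, then $E = p\,V$, and solving for the probability gives Bayes' definition,

$$ p = \frac{E}{V}. $$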